25 research outputs found

    Naturalistic Emotion Decoding From Facial Action Sets

    Researchers have theoretically proposed that humans decode other individuals' emotions or elementary cognitive appraisals from particular sets of facial action units (AUs). However, only a few empirical studies have systematically tested the relationships between the decoding of emotions/appraisals and sets of AUs, and the results are mixed. Furthermore, previous studies relied on facial expressions posed by actors; no study has used spontaneous, dynamic facial expressions in naturalistic settings. We investigated this issue using video recordings of facial expressions filmed unobtrusively in a real-life emotional situation, specifically loss of luggage at an airport. The AUs observed in the videos were annotated using the Facial Action Coding System. Male participants (n = 98) were asked to decode emotions (e.g., anger) and appraisals (e.g., suddenness) from the facial expressions. We explored the relationships between emotion/appraisal decoding and AUs using stepwise multiple regression analyses. The results revealed that all the rated emotions and appraisals were associated with sets of AUs. The profiles of the regression equations included AUs both consistent and inconsistent with theoretical proposals. The results suggest that (1) the decoding of emotions and appraisals from facial expressions is implemented through the perception of sets of AUs, and (2) the profiles of such AU sets may differ from those in previous theories.
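
    The stepwise approach described above can be illustrated with a short sketch. This is a hedged example, not the paper's analysis code: the AU indicators, the toy anger ratings, and the forward-selection criterion (smallest p-value below alpha) are all assumptions for illustration, using statsmodels OLS.

```python
# Hedged sketch: forward stepwise OLS selecting facial action units (AUs)
# that predict a rated emotion. Data, column names, and the selection
# criterion are hypothetical, not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Binary presence/absence of ten example AUs across 98 ratings (toy data).
aus = pd.DataFrame(rng.integers(0, 2, size=(98, 10)),
                   columns=[f"AU{i}" for i in (1, 2, 4, 5, 6, 7, 9, 10, 12, 15)])
# Toy anger ratings driven by AU4 (brow lowerer) and AU7 (lid tightener).
anger = 2.0 * aus["AU4"] + 1.5 * aus["AU7"] + rng.normal(0, 1, 98)

def forward_stepwise(X, y, alpha=0.05):
    """Greedily add the predictor with the smallest p-value until none pass alpha."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

model = forward_stepwise(aus, anger)
print(model.params)  # the selected AU set and its regression weights
```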

    Facial feedback affects valence judgments of dynamic and static emotional expressions

    The ability to judge others' emotions is required for the establishment and maintenance of smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions produced by an observer during the observation of that face. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. We investigated this issue by measuring emotion judgments in terms of valence and arousal dimensions while comparing dynamic vs. static presentations of facial expressions. We presented pictures and videos of angry and happy facial expressions. Participants (N = 36) were asked to judge the gender of the faces while activating either the corrugator supercilii muscle (brow lowering) or the zygomaticus major muscle (cheek raising). They were also asked to evaluate the internal states of the stimuli using the affect grid while maintaining the facial action until they finished responding. The cheek-raising condition increased the attributed valence scores compared with the brow-lowering condition. This effect of facial actions was observed for static as well as dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to judgments of the valence of emotional facial expressions.
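
    The core within-subject contrast (cheek-raising vs. brow-lowering valence scores) might be tested along the following lines. This is a minimal sketch with simulated ratings; the effect size, the rating scale, and the use of a paired t-test are assumptions, not details from the paper.

```python
# Hedged sketch: within-subject comparison of valence ratings between the
# cheek-raising and brow-lowering conditions. All numbers are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 36  # participants, matching the abstract
brow_lowering = rng.normal(loc=4.0, scale=1.0, size=n)   # toy valence scores
cheek_raising = brow_lowering + rng.normal(0.5, 0.8, n)  # assumed positive shift

t, p = stats.ttest_rel(cheek_raising, brow_lowering)
print(f"paired t({n - 1}) = {t:.2f}, p = {p:.4f}")
```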

    Perturbance: Unifying Research on Emotion, Intrusive Mentation and Other Psychological Phenomena with AI

    Intrusive mentation, rumination, obsession, and worry, referred to by Watkins as "repetitive thought" (RT), are of great interest to psychology. This is partly because every typical adult is subject to "RT". In particular, a critical feature of "RT" is also of transdiagnostic significance: for example, obsessive-compulsive disorder, insomnia, and addictions involve unconstructive "RT". We argue that "RT" cannot be understood in isolation from models of whole minds. Researchers must adopt the designer stance in the tradition of Artificial Intelligence, augmented by systematic conceptual analysis. This means developing, exploring, and implementing cognitive-affective architectures. Empirical research on "RT" needs to be driven by such theories, and theorizing about "RT" needs to consider such data. We draw attention to the H-CogAff theory of mind (motive processing, emotion, etc.) and a class of emotions it posits called perturbance (or tertiary emotions) as a foundation for the research programme we advocate. Briefly, a perturbance is a mental state in which motivators tend to disrupt executive processes. We argue that grief, limerence (the attraction phase of romantic love), and a host of other psychological phenomena involving "RT" should be conceptualized in terms of perturbance and related design-based constructs. We call for new taxonomies of "RT" in terms of information-processing architectures such as H-CogAff. We claim that general theories of emotion also need to recognize perturbance and other architecture-based aspects of emotion. Meanwhile, "cognitive" architectures need to consider the requirements of autonomous agency, leading to cognitive-affective architectures.
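
    The perturbance construct can be made concrete with a toy sketch. This is purely illustrative and not an implementation of H-CogAff: the Motivator and Executive classes, the insistence values, and the filter threshold are all invented for the example.

```python
# Toy sketch of perturbance (illustrative only; not an implementation of the
# published H-CogAff theory). Motivators whose insistence exceeds a filter
# threshold repeatedly capture executive attention, disrupting the current task.
from dataclasses import dataclass

@dataclass
class Motivator:
    content: str
    insistence: float  # how strongly the motivator competes for attention

class Executive:
    def __init__(self, filter_threshold=0.7):
        self.filter_threshold = filter_threshold
        self.focus = "planned task"

    def step(self, motivators):
        # Interrupt filter: only sufficiently insistent motivators surface.
        surfaced = [m for m in motivators if m.insistence > self.filter_threshold]
        if surfaced:
            # Perturbance: the most insistent motivator preempts the focus,
            # whether or not the agent endorses the interruption.
            self.focus = max(surfaced, key=lambda m: m.insistence).content
        return self.focus

executive = Executive()
motivators = [Motivator("check the news", 0.4), Motivator("grief rumination", 0.9)]
for _ in range(3):
    print(executive.step(motivators))  # focus captured on every cycle
```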

    Automatic Recognition of Facial Displays of Unfelt Emotions

    Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotional states. We show that, overall, the problem of recognizing whether facial movements are expressions of authentic emotions can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. The performance of the proposed model shows that, on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
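
    The general idea of aggregating features along fiducial trajectories might be sketched as follows. This is a simplified stand-in for the paper's method: the tensor shapes, the random descriptors, and mean-pooling over time are assumptions chosen for brevity.

```python
# Hedged sketch of aggregating features along fiducial trajectories
# (a simplified stand-in for the paper's method; shapes are made up).
import numpy as np

rng = np.random.default_rng(2)
T, L, D = 30, 68, 128  # frames, tracked landmarks, descriptor dimensionality
# Per-frame, per-landmark descriptors, e.g. sampled from a deep feature map
# at each landmark's (x, y) position in that frame.
descriptors = rng.normal(size=(T, L, D))

# One trajectory per landmark: pool its descriptors over time so each
# fiducial point contributes a single vector to the video representation.
trajectory_features = descriptors.mean(axis=0)          # shape (L, D)
video_representation = trajectory_features.reshape(-1)  # shape (L * D,)
print(video_representation.shape)  # fed to a classifier in a full pipeline
```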

    Facial Expressions of Basic Emotions in Japanese Laypeople

    Demonstration that Japanese facial expressions differ from Ekman's theory: the first report of Japanese facial expressions of the six basic emotions (Kyoto University press release, 2019-02-14). Facial expressions that show emotion play an important role in human social interactions. In previous theoretical studies, researchers have suggested that there are universal, prototypical facial expressions specific to basic emotions. However, the results of some empirical studies that tested the production of emotional facial expressions based on particular scenarios only partially supported the theoretical predictions. In addition, all of the previous studies were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. The participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios. Under the baseline condition, the participants imitated photographs of prototypical facial expressions. The produced facial expressions were automatically coded using FaceReader in terms of the intensities of emotions and facial action units. In contrast to the photograph condition, where all target emotions were shown clearly, the scenario condition elicited the target emotions clearly only for happy and surprised expressions. The photograph and scenario conditions yielded different profiles for the intensities of emotions and facial action units associated with all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions, but suggest that the theory may need to be modified based on empirical evidence.
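
    The kind of condition-wise profile comparison reported above might look like the following sketch. All intensity values are fabricated; a real analysis would load FaceReader's exported intensities rather than simulate them, and the paired t-test per AU is an assumed choice.

```python
# Hedged sketch: comparing mean AU intensity profiles between the photograph
# and scenario conditions. Intensities are fabricated; a real analysis would
# load FaceReader's exported values instead of simulating them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, au_names = 65, ["AU4", "AU6", "AU12", "AU25"]      # participants, example AUs
photograph = rng.beta(5, 2, size=(n, len(au_names)))  # toy intensities in [0, 1]
scenario = rng.beta(2, 5, size=(n, len(au_names)))

for i, au in enumerate(au_names):
    t, p = stats.ttest_rel(photograph[:, i], scenario[:, i])
    print(f"{au}: photo={photograph[:, i].mean():.2f} "
          f"scenario={scenario[:, i].mean():.2f} (t={t:.2f}, p={p:.3f})")
```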

    Non-verbal expression perception and mental state attribution by third parties

    Understanding the internal states of others is essential in social exchanges. The aim of this thesis is to provide a deeper understanding of the impact nonverbal cues have on the perception of internal states, namely emotions and associated cognitive appraisals. First, we explored naturalistic behaviour recorded by a hidden camera, described with technical coding systems: FACS for perceived facial muscle movements, and a coding scheme we defined ourselves for hand, arm, and torso movements. Participants were asked to judge the observed persons' internal states. These descriptions of behaviours and perceptual judgments allow us to link internal state attributions to concrete physical expressions. Second, a novel method was used for expression exploration by transposing naturalistic behaviours to a virtual agent, Greta, which enables fine-tuning of expressions. To improve the synchronisation of behaviours, the Multimodal Sequential Expression model was created for the Greta agent. Complex expressions were manipulated one cue at a time, and the expressions were judged by participants, who were asked to attribute internal states to the agent. The results support componential approaches to expression, in which particular cues are considered meaningful.

    Towards an affective information-processing theory of sleep onset and insomnia

    We develop a cognitive-affective theory of sleep onset and insomnia (Beaudoin, 2013, 2014). This somnolent information-processing theory is design-based (Artificial Intelligence-inspired). We argue that the two-process model of sleep (Borbély, 1982, 2016) is necessary but insufficient because it ignores pro-somnolent and insomnolent factors, including affective ones. We argue that the phylogenesis of the human sleep-onset control system (SOCS) faced the design challenge of integrating information from deliberative and reflective (executive) processes and various types of emotion. Because the core SOCS is evolutionarily ancient and modular, it cannot decode executive information, and executive processes could not fully control lower sleep-onset mechanisms; yet some mutual, indirect interactions were required. Our theory extends and applies the H-CogAff theory of emotions (Sloman, 2003, 2008), while adding sleep-onset control mechanisms. We propose that the human SOCS is coarsely sensitive to primary emotions (based on alarms), secondary emotions (involving deliberative, motive-management processes), tertiary emotions (perturbance, involving reflective, meta-management processes), moods (Thayer, 2003), interrupt filtering, attributes of motivators currently being managed or suppressed (Beaudoin, 1994), sense-making, and other processes, all of which operate in parallel with each other. Insomnia often involves perturbance, a loss of control of attention. We will use limerence (Tennov, 1979) and grief (Wright, Sloman, & Beaudoin, 1996) as examples of perturbant emotions and other affects that can disrupt sleep. We will discuss how new information-processing treatments for insomnia that can be supported by mobile apps like mySleepButton®, such as serial diverse imagining (a form of cognitive shuffling; Beaudoin, Digdon, O'Neill, & Racour, 2016), personalized body scans, massage, and other treatments might differentially affect the somnolent mechanisms we propose. We will present some of the new questions, from empirical and designer perspectives, that our theory raises about affect, mental architecture, sleep onset, and insomnia.
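
    The claim that the two-process model needs affective extension can be given a toy quantitative form. This sketch is our illustration, not the authors' formal model: the functional forms of the homeostatic and circadian terms and the additive treatment of affective factors are all assumptions.

```python
# Toy quantitative sketch (our illustration, not the authors' formal model):
# two-process sleep propensity extended with parallel affective factors.
import math

def sleep_propensity(hours_awake, clock_hour, perturbance, pro_somnolent):
    homeostatic = 1 - math.exp(-hours_awake / 18.0)  # Process S, saturating
    circadian = math.cos(2 * math.pi * (clock_hour - 4) / 24)  # Process C, ~4 a.m. peak
    # Insomnolent factors (e.g., perturbant rumination) subtract from propensity;
    # pro-somnolent factors (e.g., a body scan) add to it.
    return homeostatic + circadian + pro_somnolent - perturbance

# Same bedtime, with and without a perturbant emotional state:
print(sleep_propensity(16, 23, perturbance=0.0, pro_somnolent=0.2))
print(sleep_propensity(16, 23, perturbance=0.8, pro_somnolent=0.2))
```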

    Evaluation of multimodal sequential expressions of emotions in ECA

    A model of multimodal sequential expressions of emotion for an Embodied Conversational Agent was developed. The model is based on video annotations and on descriptions found in the literature. A language has been derived to describe expressions of emotions as a sequence of facial and body movement signals. An evaluation study of our model is presented in this paper. Animations of 8 sequential expressions corresponding to the emotions anger, anxiety, cheerfulness, embarrassment, panic fear, pride, relief, and tension were realized with our model. The recognition rate of these expressions is higher than chance level, leading us to believe that our model is able to generate recognizable expressions of emotions, even for emotional expressions not considered to be universally recognized.
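
    What a sequential, multimodal expression description might look like as a data structure is sketched below. The schema is invented for illustration and is not the actual specification language used with the Greta agent; signal names and timings are made up.

```python
# Invented schema (not the actual specification language used with Greta)
# illustrating a sequential, multimodal expression: ordered, timed signals.
from dataclasses import dataclass

@dataclass
class Signal:
    modality: str   # "face", "gaze", "head", "gesture", or "torso"
    name: str       # e.g. an AU combination or a gesture label
    start: float    # seconds from expression onset
    duration: float

embarrassment = [
    Signal("gaze", "avert_down", start=0.0, duration=1.2),
    Signal("face", "controlled_smile", start=0.3, duration=1.0),
    Signal("head", "tilt_down", start=0.4, duration=1.5),
    Signal("gesture", "face_touch", start=0.8, duration=1.0),
]

for s in sorted(embarrassment, key=lambda s: s.start):
    print(f"{s.start:>4.1f}s  {s.modality:<8} {s.name} ({s.duration}s)")
```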